Correlations in Semantic Memory

Authors

  • Ken McRae
  • George S. Cree
  • Robyn Westmacott
Abstract

The role of feature correlations in semantic memory is a central issue in conceptual representation. In two versions of the feature verification task, subjects were faster to verify that a feature is part of a concept (e.g., grapefruit) if it is strongly rather than weakly intercorrelated with the other features of that concept. Contrasting interactions between feature correlations and SOA were found when the concept versus the feature was presented first. An attractor network model of word meaning that naturally learns and uses feature correlations predicted those interactions. This research provides further evidence that semantic memory includes implicitly learned statistical knowledge of feature relationships, in contrast to theories such as spreading activation networks, in which feature correlations play no role.

To appear in a Special Issue of the Canadian Journal of Experimental Psychology on Visual Word Recognition (December, 1999). This work was supported by NSERC grant RGPIN155704 to the first author and NSERC postgraduate fellowships to the second and third authors. Part of this research formed RW's University of Western Ontario undergraduate thesis. The authors thank Michael Masson and Jeff Elman for helpful comments on earlier drafts. Correspondence concerning this article should be addressed to Ken McRae, Department of Psychology, Social Science Centre, University of Western Ontario, London, Ontario, N6A 5C2. Email: [email protected]

Our environment is highly structured. In the domain of language processing, for instance, there are numerous sources of structure to which people are sensitive. Some words occur together more often than chance within sentences, as do some letters and phonemes within words. In this article, we focus on the fact that some semantic features tend to co-occur across objects and entities. For example, things in the world that <have wings> also tend to <fly> and <have feathers>. Almost 25 years ago, Rosch and her colleagues noted that this environmental structure could be conceptualized in terms of co-occurring clusters of features (Rosch, 1978; Rosch & Mervis, 1975), an observation upon which the present research builds.

Although it is uncontroversial that people learn and use statistical relationships between a concept and its features (e.g., between robin and <has wings>; Smith & Medin, 1981), many researchers claim that semantic memory does not include knowledge of statistically based feature correlations (Murphy & Wisniewski, 1989). In contrast, some research on incidental concept learning (Billman & Knutson, 1996) and the computation of word meaning (McRae, de Sa, & Seidenberg, 1997, hereafter MdSS) has found that people do indeed learn feature correlations. The primary goal of this article is to add to this debate by presenting further evidence for the role of feature correlations in lexically based semantic tasks. The secondary goal is to use an attractor network to elucidate the principles that we feel are important for understanding the role of feature correlations in lexical processing, namely, implicit and incremental learning through experience with the environment, in conjunction with distributed semantic representations. Throughout the article, concept names and examples of stimuli are italicized, whereas feature names are presented in angle brackets.
Arguments Against Feature Correlations

We begin by noting that in this article, a "feature correlation" refers to a pair of features that tend to appear together in basic-level concepts. For example, the features <has wings> and <flies> are correlated because they co-occur in things like robins, sparrows, and hawks. The theoretical arguments that have led researchers to claim that people do not learn these correlations are based primarily on spreading activation networks and prototype models. From the standpoint of a spreading activation network, encoding feature correlations requires linking every feature with every other feature, weighted by the degree of correlation (Smith & Medin, 1981). Similarly, if a prototype is taken as a list of features, feature correlations could be instantiated as direct links between them. However, the process of linking features is considered problematic. Because there are a huge number of possible feature pairs to consider in the world, the task of constructing links between each correlated pair is viewed as computationally intractable. Some researchers have dealt with this problem by claiming that such a link is constructed only if the correlation is explicitly noticed (Murphy & Wisniewski, 1989), and that a relationship between two features would be noticed only if a person previously possessed an underlying theory for why the features might co-occur; for example, that wings are used for flying (Murphy & Medin, 1985). This suggests that the encoding of a feature correlation is an explicit and rare event.
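To make this computational claim concrete, the sketch below (our illustration; the concepts, features, and values are hypothetical rather than taken from any norming study) computes the correlation between every pair of features across a small set of basic-level concepts coded as binary feature vectors, and then counts how many candidate links an explicit pairwise encoding would require.

# Illustrative Python sketch: pairwise feature correlations over a small,
# hypothetical concept-by-feature matrix (1 = feature applies, 0 = it does not).
import numpy as np
from itertools import combinations

features = ["has_wings", "flies", "has_feathers", "has_wheels", "used_for_transport"]
concepts = ["robin", "sparrow", "hawk", "penguin", "truck", "van"]

M = np.array([
    [1, 1, 1, 0, 0],   # robin
    [1, 1, 1, 0, 0],   # sparrow
    [1, 1, 1, 0, 0],   # hawk
    [1, 0, 1, 0, 0],   # penguin
    [0, 0, 0, 1, 1],   # truck
    [0, 0, 0, 1, 1],   # van
])

# Pearson correlation between each pair of feature columns: features that tend
# to appear in the same concepts (e.g., has_wings and has_feathers) receive
# high positive values.
for i, j in combinations(range(len(features)), 2):
    r = np.corrcoef(M[:, i], M[:, j])[0, 1]
    print(f"{features[i]} ~ {features[j]}: r = {r:+.2f}")

# The intractability objection: with f features, an explicit network would need
# f * (f - 1) / 2 candidate links (roughly 3 million for f = 2500).
f = 2500   # hypothetical feature vocabulary size
print("candidate feature pairs:", f * (f - 1) // 2)

On this view, the objection is not that such pairwise statistics are impossible to compute, but that explicitly constructing and maintaining millions of links is implausible, which is why proponents of this position restrict correlation learning to explicitly noticed, theory-supported pairs.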
Empirically, two studies have failed to find robust effects of feature correlations (Malt & Smith, 1984; Murphy & Wisniewski, 1989). Using an intentional category learning task, Murphy and Wisniewski found no evidence that subjects based categorization decisions or typicality judgements on statistical relations among features that they would not have expected a priori to be correlated. They concluded that "the pre-existing connection of features appears to be a necessary requirement for noticing correlations (with these types of categories and procedures, at least)" (Murphy & Wisniewski, 1989, p. 40). These arguments and empirical results have led to the claim that "interproperty relationships are outside the boundary conditions of almost all current categorization models (including prototype, exemplar, and rule-based models). Therefore, these models currently have limited generality, and this limitation is most evident where one might most want to generalize meaningful stimuli" (Medin & Coley, 1998, p. 417).

Incidental Learning of Feature Correlations

The null effects of feature correlations appear to be caused by the confluence of a number of factors, including the amount of experience a person has with exemplars, the amount of structure in the domain being learned, and whether learning occurs incidentally or intentionally. First, in category learning experiments that have found null effects, such as Murphy and Wisniewski (1989), subjects were given relatively little experience with the exemplars from which this information had to be extracted. In contrast, the present experiments tapped knowledge that accrued over approximately 20 years of experience with objects and entities from various basic-level categories. Second, experiments showing null effects have used impoverished stimuli, both in terms of the number of training exemplars and their complexity, whereas natural concept learning takes place in a complex world with rich structure. The empirical and modeling work of Billman and her colleagues has shown that complexity assists rather than impairs learning, in that a correlation between two features is easier to learn if it is part of a coherent system of correlations (Billman, 1989; Billman & Heit, 1988; Billman & Knutson, 1996). Finally, effects of feature correlations tend to show themselves in concept learning tasks that promote incidental rather than intentional learning (Wattenmaker, 1991).

In a typical intentional learning categorization experiment, subjects are presented with a series of exemplars, one at a time, and are asked to place each in one of two categories. Subjects receive immediate feedback (even on the first trial), with this cycle continuing until performance reaches a predetermined criterion. However, these experiments bear little resemblance to natural concept learning, which is better characterized as incidental or observational learning. We learn by observing things, using them, and interacting with them in varied contexts. These experiences provide rich feedback that focuses the learner on more diverse and integrative aspects of the stimuli than does a task in which someone is asked to find the cue that enables distinguishing between two categories. A number of incidental learning studies are consistent with this claim (Billman & Knutson, 1996; Clapper & Bower, 1991; Kaiser & Proffitt, 1984; Reber, 1989; Wattenmaker, 1993). For example, experiments using habituation paradigms have demonstrated that 10-month-old infants are sensitive to correlations among the visual features of objects and pictures, and that these correlations appear to play an important role in categorization and concept learning (Younger, 1990; Younger & Cohen, 1983, 1986). Thus, it may be the case that the environment's structure is captured by an incremental, implicit learning mechanism (Holyoak & Spellman, 1993).

Attractor networks are useful as examples of learning models that encode and use this type of covariation information, and can be applied to semantic processing. Attractor networks can be viewed as instantiations and extensions of prototype models. These networks are interactive, parallel-processing models in which distributed representations are used for constructing stable states in a multidimensional state space. Word meaning can be represented as patterns of activation across a set of semantic feature units, so that concepts are not accessed directly from separate memory locations as discrete, local units, but are computed on-line as unique patterns of activation. Such models are particularly well suited for learning the structure in a set of training patterns. For example, pairs of features that co-occur in concepts on which a model is trained will have the weight between their units strengthened, and this will influence the trajectory that the model follows through state space as it settles to an attractor state representing the meaning of a word (MdSS).
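As a concrete illustration of this principle (a minimal sketch under simplifying assumptions, not the MdSS model; the feature set and patterns are hypothetical), the following Hopfield-style network strengthens the weight between units whose features co-occur in its training patterns, and then settles from a partial input to the nearest stored pattern, that is, an attractor.

# Minimal Hopfield-style attractor sketch (illustrative only; hypothetical
# patterns, not the MdSS model). Features that co-occur across training
# patterns acquire positive weights, and settling completes a partial input.
import numpy as np

rng = np.random.default_rng(0)

# Units stand for semantic features (hypothetical set):
# <has_wings>, <flies>, <has_feathers>, <has_wheels>, <has_engine>,
# <used_for_transport>, <moves>
bird    = np.array([ 1,  1,  1, -1, -1, -1,  1])
vehicle = np.array([-1, -1, -1,  1,  1,  1,  1])
patterns = [bird, vehicle]

# Hebbian learning: the weight between two units grows whenever the
# corresponding features take the same value in a training pattern, so
# features that co-occur across concepts become mutually excitatory.
n = len(bird)
W = np.zeros((n, n))
for p in patterns:
    W += np.outer(p, p)
np.fill_diagonal(W, 0)

def settle(state, sweeps=5):
    """Asynchronously update units until the network reaches a stable state."""
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# A partial input in which only <has_wings> is known to be present (0 = unknown).
probe = np.array([1, 0, 0, 0, 0, 0, 0])
print(settle(probe))   # settles to the bird attractor: [ 1  1  1 -1 -1 -1  1]

On this account, the same Hebbian logic is what allows correlated features to shape the settling trajectory: once a few of a concept's features become active, features that are strongly intercorrelated with them are filled in more readily than weakly intercorrelated ones.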
On-line Lexical Processing

MdSS conducted the first study of the role of feature correlations in computing word meaning. They used semantic feature production norms to construct representations for 190 living and nonliving thing concepts in terms of individual and correlated features. MdSS showed that priming effects for pairs of living things (e.g., eagle-hawk) were predicted by similarity in terms of correlated feature pairs, but not in terms of individual features. In contrast, priming effects for nonliving thing pairs (e.g., truck-van) were predicted by similarity in terms of individual features, whereas correlated feature pairs did not predict residual variation. This difference was attributed to the fact that nonliving thing concepts possess, on average, relatively fewer correlated features than do living things (Keil, 1989), thus providing less opportunity to observe their influence. MdSS also conducted a feature verification task ("Is this feature reasonably true of this concept?") in which they found that the degree to which a specific feature was correlated with the other features of a concept was the best predictor of verification latencies. A Hopfield (1982, 1984) attractor network provided mechanistic accounts of both experiments.

The purpose of the present research is to provide further evidence for the role of feature correlations in the computation of word meaning. Experiment 1 is an extension of the MdSS feature verification task that includes two important changes. First, it involves more thorough equating of possible confounding variables. Second, an SOA manipulation is used to test the influence of feature correlations over an extended time course. In Experiment 2, the feature name was presented prior to the concept name, thus demanding a different view of the underlying computations, and leading to different model predictions and human results. In fact, the simulations predict contrasting interactions between SOA and the influence of feature correlations, and the human data bear these out.
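For readers who want a concrete sense of the predictor at issue, the sketch below shows one way the degree to which a feature is intercorrelated with a concept's other features could be quantified. The production counts, feature and concept names, and the specific formula (summed shared variance with the concept's other features) are our assumptions for illustration, not the materials or exact measure used by MdSS or in the present experiments, although strongly versus weakly intercorrelated features in those experiments correspond to high versus low values of a measure of this general kind.

# Illustrative sketch of a feature intercorrelation predictor (hypothetical
# production-norm counts: rows are concepts, columns are features, values are
# how many subjects listed the feature for the concept).
import numpy as np

features = ["has_wings", "flies", "has_feathers", "is_juicy", "is_tart", "grows_on_trees"]
concepts = ["robin", "hawk", "penguin", "grapefruit", "lemon", "apple"]
M = np.array([
    [20, 18, 22,  0,  0,  0],   # robin
    [19, 20, 21,  0,  0,  0],   # hawk
    [18,  0, 23,  0,  0,  0],   # penguin
    [ 0,  0,  0, 17, 15, 12],   # grapefruit
    [ 0,  0,  0, 14, 19, 10],   # lemon
    [ 0,  0,  0, 16,  0, 18],   # apple
])

def intercorrelational_strength(target, concept):
    """Sum of shared variance (r^2 x 100) between the target feature and the
    concept's other features; an illustrative stand-in for the degree to which
    a feature is intercorrelated with the remaining features of a concept."""
    c = concepts.index(concept)
    t = features.index(target)
    strength = 0.0
    for j in range(len(features)):
        if j == t or M[c, j] == 0:     # consider only the concept's other features
            continue
        r = np.corrcoef(M[:, t], M[:, j])[0, 1]
        strength += 100 * r ** 2
    return strength

# Features differ in how strongly they are intercorrelated with a concept's
# other features; on the account tested here, higher values predict faster
# feature verification.
print(intercorrelational_strength("is_tart", "grapefruit"))
print(intercorrelational_strength("grows_on_trees", "grapefruit"))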

Similar Articles

Modeling Property Intercorrelations in Conceptual Memory

Behavioral experiments have demonstrated that people encode knowledge of correlations among semantic properties of entities and that this knowledge influences performance on semantic tasks (McRae, 1992; McRae, de Sa, & Seidenberg, 1993). Independently, in connectionist theory, it has been claimed that relationships among semantic properties may provide structure that is required for the relativ...


The Role of Correlated Properties in Accessing Conceptual Memory

A fundamental question in research on conceptual structure concerns how information is represented in memory and used in tasks such as recognizing words. The present research focused on the role of correlations among semantic properties in conceptual memory. Norms were collected for 190 entities from 10 categories. Property intercorrelations influenced people's performance in both a property ve...


Semantic feature production norms for a large set of living and nonliving things.

Semantic features have provided insight into numerous behavioral phenomena concerning concepts, categorization, and semantic memory in adults, children, and neuropsychological populations. Numerous theories and models in these areas are based on representations and computations involving semantic features. Consequently, empirically derived semantic feature production norms have played, and cont...


Shared Features Dominate Semantic Richness Effects for Concrete Concepts.

When asked to list semantic features for concrete concepts, participants list many features for some concepts and few for others. Concepts with many semantic features are processed faster in lexical and semantic decision tasks (Pexman, Holyk, & Monfils, 2003; Pexman, Lupker, & Hino, 2002). Using both lexical and concreteness decision tasks, we provided further insight into these number-of-featu...

